Finite dimensional algorithms for the hidden Markov model multi-armed bandit problem

Authors

  • Vikram Krishnamurthy
  • Josipa Mickova
Abstract

The multi-armed bandit problem is widely used in scheduling traffic in broadband networks, in manufacturing systems, and in robotics. This paper presents a finite-dimensional optimal solution to the multi-armed bandit problem for hidden Markov models. The key to solving any multi-armed bandit problem is to compute the Gittins index. In this paper a finite-dimensional algorithm is presented which exactly computes the Gittins index. Suboptimal algorithms for computing the Gittins index are also presented and experimentally shown to perform almost as well as the optimal method. Finally, an application of the algorithms to tracking multiple targets with a single intelligent sensor is presented.
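The setting the abstract describes can be illustrated with a minimal sketch: each arm is a hidden Markov model, the controller maintains a belief (filtered state distribution) per arm, and at each step plays the arm whose belief has the highest index. All parameters below (transition matrix `A`, observation likelihoods `B`, rewards `r`) are hypothetical, and the one-step expected reward is used only as a stand-in for the exact Gittins index that the paper computes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-arm HMM bandit: each arm has a 2-state Markov chain with
# transition matrix A, observation likelihoods B[state, obs], and
# state-dependent rewards r. All numbers are illustrative.
A = np.array([[0.9, 0.1], [0.2, 0.8]])      # state transition matrix
B = np.array([[0.8, 0.2], [0.3, 0.7]])      # P(observation | state)
r = np.array([0.0, 1.0])                    # reward of each hidden state
beliefs = [np.array([0.5, 0.5]) for _ in range(2)]  # one belief per arm

def hmm_filter(belief, obs):
    """One step of the HMM (forward) filter: predict, then correct."""
    predicted = A.T @ belief                # prior after Markov transition
    posterior = B[:, obs] * predicted       # weight by observation likelihood
    return posterior / posterior.sum()      # normalize to a distribution

for t in range(20):
    # Myopic stand-in for the Gittins index: expected immediate reward under
    # each arm's belief. The paper's contribution is that the exact index is
    # finite-dimensionally computable; this proxy is only for illustration.
    indices = [b @ r for b in beliefs]
    k = int(np.argmax(indices))             # play the arm with highest index

    # Draw a (here: uniformly simulated) observation from the chosen arm and
    # update its belief; the idle arm's belief is frozen, as in the
    # classical (non-restless) bandit formulation.
    obs = int(rng.integers(0, 2))
    beliefs[k] = hmm_filter(beliefs[k], obs)
```

The frozen-belief rule for idle arms is what makes this a classical bandit rather than a restless one; replacing the myopic index with the paper's exact finite-dimensional Gittins computation changes only the `indices` line.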


Similar articles

Hidden Markov model multiarm bandits: a methodology for beam scheduling in multitarget tracking

In this paper, we derive optimal and suboptimal beam scheduling algorithms for electronically scanned array tracking systems. We formulate the scheduling problem as a multiarm bandit problem involving hidden Markov models (HMMs). A finite-dimensional optimal solution to this multiarm bandit problem is presented. The key to solving any multiarm bandit problem is to compute the Gittins index. We ...


A Value Iteration Algorithm for Partially Observed Markov Decision Process Multi-armed Bandits

A value iteration based algorithm is given for computing the Gittins index of a Partially Observed Markov Decision Process (POMDP) Multi-armed Bandit problem. This problem concerns dynamic allocation of effort among a number of competing projects, of which only one can be worked on in any time period. The active project evolves according to a finite-state Markov chain and then generates a r...
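For the fully observed finite-state case, value iteration for the Gittins index can be sketched via the restart-in-state formulation: iterate V(x) = max(continue from x, restart from s0) and read off the index as (1 - beta)·V(s0). The POMDP case addressed in the paper runs an analogous iteration over the belief simplex; the transition matrix `P` and rewards `R` below are hypothetical.

```python
import numpy as np

# Hypothetical 3-state arm: transition matrix P, state rewards R, discount.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.2, 0.7]])
R = np.array([0.0, 0.5, 1.0])
beta = 0.9

def gittins_index(P, R, beta, s0, iters=2000):
    """Gittins index of state s0 via the restart-in-state formulation:
    value-iterate V(x) = max(continue from x, restart from s0); the
    index is (1 - beta) * V(s0)."""
    n = len(R)
    V = np.zeros(n)
    for _ in range(iters):
        cont = R + beta * (P @ V)              # keep playing from state x
        restart = R[s0] + beta * (P[s0] @ V)   # jump back to s0 and play
        V = np.maximum(cont, restart)
    return (1.0 - beta) * V[s0]

indices = [gittins_index(P, R, beta, s) for s in range(3)]
```

With rewards normalized to [0, 1], the computed indices also lie in [0, 1]; scheduling then reduces to playing the arm whose current state has the largest index.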


Correction to "Hidden Markov model multiarm bandits: a methodology for beam scheduling in multitarget tracking"

We have discovered an error in the return-to-state formulation of the HMM multi-armed bandit problem in our recently published paper [4]. This note briefly outlines the error in [4] and describes a computationally simpler solution. Complete details including proofs of this simpler solution appear in the already submitted paper [3]. The error in [4] is in the return-to-state argument given in Re...


On the Whittle Index for Restless Multi-armed Hidden Markov Bandits

We consider a restless multi-armed bandit in which each arm can be in one of two states. When an arm is sampled, the state of the arm is not available to the sampler. Instead, a binary signal with a known randomness that depends on the state of the arm is available. No signal is available if the arm is not sampled. An arm-dependent reward is accrued from each sampling. In each time step, each a...


On Robust Arm-Acquiring Bandit Problems

In the classical multi-armed bandit problem, at each stage, the player has to choose one from N given projects (arms) to generate a reward depending on the arm played and its current state. The state process of each arm is modeled by a Markov chain and the transition probability is priorly known. The goal of the player is to maximize the expected total reward. One variant of the problem, the so...




Publication date: 1999